Blind image super-resolution (Blind-SR) aims to recover a high-resolution (HR) image from its corresponding low-resolution (LR) input image with unknown degradations. Most existing works design an explicit degradation estimator for each degradation to guide SR. However, it is infeasible to provide concrete labels for the many possible combinations of degradations (\eg, blur, noise, JPEG compression) to supervise the degradation estimator training. In addition, these special designs for certain degradations, such as blur, impede the models from generalizing to handle different degradations. It is therefore necessary to design an implicit degradation estimator that can extract a discriminative degradation representation for all degradations without relying on degradation ground-truth supervision. In this paper, we propose a Knowledge Distillation based Blind-SR network (KDSR). It consists of a knowledge distillation based implicit degradation estimator network (KD-IDE) and an efficient SR network. To learn the KDSR model, we first train a teacher network, KD-IDE$_{T}$, which takes paired HR and LR patches as inputs and is optimized jointly with the SR network. We then train a student network, KD-IDE$_{S}$, which takes only LR images as input and learns to extract the same implicit degradation representation (IDR) as KD-IDE$_{T}$. In addition, to make full use of the extracted IDR, we design a simple, strong, and efficient IDR-based dynamic convolution residual block (IDR-DCRB) to build the SR network. We conduct extensive experiments under classic and real-world degradation settings. The results show that KDSR achieves SOTA performance and generalizes to various degradation processes. The source code and pre-trained models will be released.
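For intuition, the following is a minimal, hypothetical PyTorch sketch of the teacher-student distillation idea described above: a teacher estimator sees the HR/LR pair while a student sees only the LR image and is trained to match the teacher's implicit degradation representation. The module names, layer sizes, and the L1 distillation loss are illustrative assumptions, not the paper's exact KD-IDE architecture or objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherIDE(nn.Module):
    """Estimates an implicit degradation representation (IDR) from an HR/LR pair."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, dim, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, hr, lr):
        # Downsample HR to the LR size so the pair can be concatenated channel-wise.
        hr_ds = F.interpolate(hr, size=lr.shape[-2:], mode="bicubic", align_corners=False)
        return self.net(torch.cat([hr_ds, lr], dim=1)).flatten(1)

class StudentIDE(nn.Module):
    """Predicts the same IDR from the LR image alone (usable at inference time)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, lr):
        return self.net(lr).flatten(1)

# One distillation step: the frozen teacher provides the target IDR for the student.
teacher, student = TeacherIDE().eval(), StudentIDE()
hr, lr = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 32, 32)
with torch.no_grad():
    idr_teacher = teacher(hr, lr)
kd_loss = F.l1_loss(student(lr), idr_teacher)  # hypothetical distillation objective
kd_loss.backward()
```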
Sight and hearing are two senses that play a vital role in human communication and scene understanding. To mimic human perception ability, audio-visual learning, which aims to develop computational approaches that learn from both the audio and visual modalities, has been a flourishing field. A comprehensive survey that can systematically organize and analyze studies of the audio-visual field is expected. Starting from an analysis of the cognitive foundations of audio-visual perception, we introduce several key findings that have inspired our computational studies. We then systematically review recent audio-visual learning studies and divide them into three categories: audio-visual boosting, cross-modal perception, and audio-visual collaboration. Through our analysis, we find that the consistency of audio-visual data across semantics, space, and time supports the above studies. To revisit the current development of the audio-visual learning field, we further propose a new perspective on audio-visual scene understanding, and then discuss and analyze feasible future directions of the audio-visual learning field. Overall, this survey reviews and presents the current audio-visual learning field from different aspects. We hope it can provide researchers with a better understanding of this field. A website including a constantly-updated survey is released: \url{https://gewu-lab.github.io/audio-visual-learning/}.
Most CNN-based super-resolution (SR) methods assume that the degradation is known (\eg, bicubic). These methods suffer severe performance drops when the real degradation differs from this assumption. Therefore, some approaches attempt to train SR networks with complex combinations of multiple degradations to cover the real degradation space. To adapt to multiple unknown degradations, introducing an explicit degradation estimator can indeed facilitate SR performance. However, previous explicit degradation estimation methods usually predict Gaussian blur kernels with the supervision of ground-truth blur kernels, and estimation errors may lead to SR failure. Thus, it is necessary to design a method that can extract an implicit, discriminative degradation representation. To this end, we propose a Meta-learning based Region Degradation Aware SR network (MRDA), including a Meta-Learning Network (MLN), a Degradation Extraction Network (DEN), and a Region Degradation Aware SR network (RDAN). To deal with the lack of ground-truth degradation, we use the MLN to rapidly adapt to the specific complex degradation after several iterations and extract implicit degradation information. Subsequently, a teacher network MRDA$_{T}$ is designed to further leverage the degradation information extracted by the MLN for SR. However, the MLN requires iterating on paired low-resolution (LR) and corresponding high-resolution (HR) images, which are not available in the inference phase. Therefore, we adopt knowledge distillation (KD) so that the student network learns to directly extract the same implicit degradation representation (IDR) as the teacher from LR images.
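As a rough illustration of the fast-adaptation idea described above, the sketch below adapts a tiny SR network to one composite degradation with a few inner-loop gradient steps on an LR/HR pair, in the spirit of MAML-style meta-learning. The stand-in network, step count, and inner learning rate are assumptions for illustration and do not reproduce the paper's MLN.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in for the meta-learned network: a x4 SR model with pixel shuffle.
mln = nn.Sequential(
    nn.Conv2d(3, 48, 3, padding=1), nn.ReLU(),
    nn.Conv2d(48, 3 * 16, 3, padding=1),
    nn.PixelShuffle(4),
)

def adapt(model, lr_img, hr_img, steps=5, inner_lr=1e-3):
    """Clone the meta-initialized model and adapt it to one composite degradation."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(steps):
        opt.zero_grad()
        F.l1_loss(adapted(lr_img), hr_img).backward()
        opt.step()
    return adapted  # its updated weights now carry implicit degradation information

lr_img, hr_img = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 128, 128)
adapted_mln = adapt(mln, lr_img, hr_img)  # only possible when HR is available (training)
```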
The target of space-time video super-resolution (STVSR) is to increase the spatial and temporal resolution of low-resolution (LR) and low-frame-rate (LFR) videos. Recent approaches based on deep learning have made significant improvements, but most of them only use two adjacent frames, i.e., short-term features, to synthesize the missing frame embedding, which cannot fully explore the information flow of consecutive input LR frames. In addition, existing STVSR models hardly exploit temporal contexts explicitly to assist high-resolution (HR) frame reconstruction. To address these issues, in this paper we propose a deformable attention network called STDAN for STVSR. First, we devise a long-short term feature interpolation (LSTFI) module, which is capable of excavating abundant content from more neighboring input frames for interpolation through a bidirectional RNN structure. Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts in dynamic video frames are adaptively captured and aggregated to enhance SR reconstruction. Experimental results on several datasets demonstrate that our approach outperforms state-of-the-art STVSR methods. The code is available at https://github.com/littlewhitesea/stdan.
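The following is a minimal sketch of bidirectional recurrent feature propagation, the mechanism the LSTFI module described above relies on to gather content from more than two neighboring frames. The ConvGRU cell, channel sizes, and the simple averaging fusion are illustrative assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.gates = nn.Conv2d(2 * ch, 2 * ch, 3, padding=1)
        self.cand = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde

class BidirectionalPropagation(nn.Module):
    """Aggregate information from all input frames in both temporal directions."""
    def __init__(self, ch=32):
        super().__init__()
        self.fwd, self.bwd = ConvGRUCell(ch), ConvGRUCell(ch)

    def forward(self, feats):  # feats: [B, T, C, H, W]
        B, T, C, H, W = feats.shape
        h_f = feats.new_zeros(B, C, H, W)
        h_b = feats.new_zeros(B, C, H, W)
        fwd_states, bwd_states = [], []
        for t in range(T):
            h_f = self.fwd(feats[:, t], h_f)
            fwd_states.append(h_f)
        for t in reversed(range(T)):
            h_b = self.bwd(feats[:, t], h_b)
            bwd_states.insert(0, h_b)
        # A frame between t and t+1 can be synthesized from states that have seen
        # the whole past (forward pass) and the whole future (backward pass).
        return [0.5 * (f + b) for f, b in zip(fwd_states, bwd_states)]

feats = torch.rand(1, 5, 32, 16, 16)
states = BidirectionalPropagation()(feats)
print(len(states), states[0].shape)  # 5 torch.Size([1, 32, 16, 16])
```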
Reference-based super-resolution (RefSR) has made significant progress in producing realistic textures with the help of an external reference (Ref) image. However, existing RefSR methods obtain high-quality correspondence matching at the cost of computation that grows quadratically with the input size, which limits their application. Moreover, these approaches usually suffer from scale misalignment between the low-resolution (LR) image and the Ref image. In this paper, we propose an Accelerated Multi-Scale Aggregation network (AMSA) for reference-based super-resolution, including a Coarse-to-Fine Embedded PatchMatch (CFE-PatchMatch) module and a Multi-Scale Dynamic Aggregation (MSDA) module. To improve matching efficiency, we design a novel Embedded PatchMatch scheme with random-sample propagation, which supports end-to-end training with asymptotically linear computational cost with respect to the input size. To further reduce the computational cost and speed up convergence, we apply a coarse-to-fine strategy on top of the Embedded PatchMatch, which constitutes CFE-PatchMatch. To fully leverage reference information across multiple scales and enhance robustness to scale misalignment, we develop the MSDA module consisting of Dynamic Aggregation and Multi-Scale Aggregation. Dynamic Aggregation corrects minor scale misalignment by dynamically aggregating features, while Multi-Scale Aggregation brings robustness to large scale misalignment by fusing multi-scale information. Experimental results show that the proposed AMSA achieves superior performance over state-of-the-art approaches on both quantitative and qualitative evaluations.
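To make the cost issue concrete, the snippet below implements the brute-force coarse correspondence step common in reference-based SR: every LR patch is compared against every Ref patch, which is exactly the quadratic-cost matching that the embedded PatchMatch scheme with random-sample propagation is designed to replace. The patch size and feature tensors are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def coarse_match(lr_feat, ref_feat, patch=3):
    """Return, for every LR patch, the index of its best-matching Ref patch."""
    # Unfold into overlapping patches: [B, C*patch*patch, N].
    q = F.unfold(lr_feat, patch, padding=patch // 2)
    k = F.unfold(ref_feat, patch, padding=patch // 2)
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    sim = torch.bmm(q.transpose(1, 2), k)   # [B, N_lr, N_ref]: O(N^2) time and memory
    conf, idx = sim.max(dim=2)              # best Ref patch per LR patch
    return idx, conf

lr_feat = torch.rand(1, 64, 24, 24)
ref_feat = torch.rand(1, 64, 24, 24)
idx, conf = coarse_match(lr_feat, ref_feat)
print(idx.shape, conf.shape)  # torch.Size([1, 576]) torch.Size([1, 576])
```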
Non-Local Attention (NLA) brings significant improvement for Single Image Super-Resolution (SISR) by leveraging the intrinsic feature correlation in natural images. However, NLA assigns large weights to noisy information and consumes quadratic computation resources with respect to the input size, limiting its performance and application. In this paper, we propose a novel Efficient Non-Local Contrastive Attention (ENLCA) to perform long-range visual modeling and leverage more relevant non-local features. Specifically, ENLCA consists of two parts: Efficient Non-Local Attention (ENLA) and Sparse Aggregation. ENLA adopts the kernel method to approximate the exponential function and obtains linear computational complexity. For Sparse Aggregation, we multiply the inputs by an amplification factor to focus on informative features, yet the variance of the approximation increases exponentially. Therefore, contrastive learning is applied to further separate relevant and irrelevant features. To demonstrate the effectiveness of ENLCA, we build an architecture called the Efficient Non-Local Contrastive Network (ENLCN) by adding a few of our modules to a simple backbone. Extensive experimental results show that ENLCN reaches superior performance over state-of-the-art approaches on both quantitative and qualitative evaluations.
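The linear-complexity attention idea can be sketched as follows: with a positive feature map phi such that exp(q·k) is approximated by phi(q)·phi(k), the keys and values are aggregated once and each query only needs a dot product, so the cost grows linearly in the number of pixels instead of quadratically. The elu+1 feature map used here is a common stand-in and not necessarily the kernel approximation of the exponential used in ENLA; the sparse aggregation and contrastive learning parts are omitted.

```python
import torch
import torch.nn.functional as F

def linear_nonlocal_attention(q, k, v, eps=1e-6):
    """q, k, v: [B, N, C] flattened pixel features; returns [B, N, C]."""
    q = F.elu(q) + 1.0                                # positive feature map phi(.)
    k = F.elu(k) + 1.0
    kv = torch.einsum("bnc,bnd->bcd", k, v)           # [B, C, C]: aggregated once
    z = 1.0 / (torch.einsum("bnc,bc->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnc,bcd,bn->bnd", q, kv, z)  # linear in N

B, N, C = 2, 64 * 64, 32
q, k, v = (torch.rand(B, N, C) for _ in range(3))
out = linear_nonlocal_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4096, 32])
```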
Leveraging temporal synchronization and association within sight and sound is an essential step towards robust localization of sounding objects. To this end, we propose a space-time memory network for sounding object localization in videos. It can simultaneously learn spatio-temporal attention over both uni-modal and cross-modal representations from the audio and visual modalities. We show and analyze, both quantitatively and qualitatively, the effectiveness of incorporating spatio-temporal learning in localizing audio-visual objects. We demonstrate that our approach generalizes over various complex audio-visual scenes and outperforms recent state-of-the-art methods.
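As a toy illustration of cross-modal attention for sounding-object localization, the sketch below lets an audio embedding query a visual feature map and treats the resulting similarity map as a localization heat map. The feature dimensions and cosine-similarity scoring are assumptions; the paper's space-time memory network additionally attends over time and over uni-modal representations.

```python
import torch
import torch.nn.functional as F

def audio_visual_heatmap(visual_feat, audio_feat):
    """visual_feat: [B, C, H, W]; audio_feat: [B, C] -> heat map [B, H, W]."""
    v = F.normalize(visual_feat, dim=1)
    a = F.normalize(audio_feat, dim=1)
    sim = torch.einsum("bchw,bc->bhw", v, a)  # cosine similarity per spatial location
    return torch.sigmoid(sim)                 # soft localization map

heat = audio_visual_heatmap(torch.rand(2, 128, 14, 14), torch.rand(2, 128))
print(heat.shape)  # torch.Size([2, 14, 14])
```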
A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose the residual dense block (RDB) to extract abundant local features via densely connected convolutional layers. RDB further allows direct connections from the state of the preceding RDB to all the layers of the current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of the wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.
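A compact sketch of the residual dense block described above: densely connected convolutional layers, 1x1 local feature fusion, and a local residual connection. The growth rate and layer count below are illustrative; the full RDN stacks many such blocks and adds global feature fusion on top.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # Local feature fusion: 1x1 conv over the concatenation of all states.
        self.lff = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        states = [x]
        for layer in self.layers:
            states.append(layer(torch.cat(states, dim=1)))   # dense connections
        return x + self.lff(torch.cat(states, dim=1))         # local residual learning

x = torch.rand(1, 64, 48, 48)
print(RDB()(x).shape)  # torch.Size([1, 64, 48, 48])
```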
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from a trade-off between accuracy and efficiency. Recent advances in neural operators, a family of mesh-independent neural-network-based PDE solvers, suggest a promising way to overcome this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
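For intuition, the toy sketch below illustrates the Koopman idea that KNO builds on: states are lifted into a space of observables where the dynamics are advanced by a single linear operator, fitted here by least squares in DMD style. The hand-crafted observables and the logistic-map example are illustrative assumptions; KoopmanLab instead learns the lifting with neural encoder/decoder networks, and this snippet does not use the KoopmanLab API.

```python
import numpy as np

def lift(u):
    """Hand-crafted observables; KNO learns this map instead."""
    return np.stack([u, u**2, np.sin(u)], axis=-1).reshape(u.shape[0], -1)

# Trajectory of a simple nonlinear map u_{t+1} = 3.7 * u_t * (1 - u_t).
u = np.empty((200, 1))
u[0] = 0.3
for t in range(199):
    u[t + 1] = 3.7 * u[t] * (1 - u[t])

G, G_next = lift(u[:-1]), lift(u[1:])
K, *_ = np.linalg.lstsq(G, G_next, rcond=None)  # g(u_{t+1}) ≈ g(u_t) @ K
pred = lift(u[:1])
for _ in range(5):
    pred = pred @ K                             # roll out linearly in observable space
print(pred[0, 0], u[5, 0])                      # first observable is the state itself
```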
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people's internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations. We formulate a novel optimization problem to infer the human's learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
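As a toy, hypothetical illustration of the core modeling step, the sketch below treats the human's internal estimate of a robot parameter as a dynamical system theta_{t+1} = f(theta_t, obs_t) and infers the dynamics (here a single learning rate) from a demonstrated estimate trajectory by least squares. The linear update rule and synthetic data are assumptions; the paper learns a nonlinear dynamics model and embeds it in the robot's planning problem.

```python
import numpy as np

rng = np.random.default_rng(1)
true_inertia, alpha_true = 2.0, 0.3

# Simulate a "human" whose internal estimate moves toward each noisy observation.
theta, observations = [0.5], []
for _ in range(20):
    obs = true_inertia + 0.05 * rng.standard_normal()
    observations.append(obs)
    theta.append(theta[-1] + alpha_true * (obs - theta[-1]))
theta, observations = np.array(theta), np.array(observations)

# Infer the learning dynamics from the demonstrated trajectory:
# theta_{t+1} - theta_t = alpha * (obs_t - theta_t)  ->  fit alpha by least squares.
deltas = theta[1:] - theta[:-1]
errors = observations - theta[:-1]
alpha_hat = float(np.dot(errors, deltas) / np.dot(errors, errors))
print(f"inferred learning rate: {alpha_hat:.2f} (true value: {alpha_true})")
```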